A hybrid time-frequency domain articulatory speech synthesizer
Authors
M. M. Sondhi and J. Schroeter
Abstract
High quality speech at low bit rates (e.g., 2400 bits/s) is one of the important objectives of current speech research. As part of a long-range activity on this problem, we have developed an efficient computer program that will serve as a tool for investigating whether articulatory speech synthesis may achieve this low bit rate. At a sampling frequency of 8 kHz, the most comprehensive version of the program, including nasality and frication, runs at about twice real time on a Cray-1 computer.

I. INTRODUCTION

Low bit rate coding of speech (Flanagan et al. [1]) is an important objective of current speech research. There are essentially three different methods used for speech synthesis from low bit rate data (e.g., Flanagan [2] and Linggard [3]): formant synthesis, synthesis from linear prediction coefficients (LPC), and articulatory speech synthesis. Formant synthesis models the spectrum of speech, while linear prediction models the signal waveform using correlation techniques. For these methods there exist accompanying analysis procedures for obtaining the low bit rate data directly from the speech input. Articulatory synthesis models the speech production mechanism directly. At present, however, there is no appropriate analysis procedure which satisfactorily solves the related "inverse" problem of obtaining articulatory parameters from spoken utterances (for preliminary at...
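To make the distinction between these signal-model and production-model approaches concrete, the following is a minimal sketch (not taken from this paper) of the standard correlation-based route to linear prediction coefficients: compute a short frame's autocorrelation and solve the resulting Toeplitz normal equations with the Levinson-Durbin recursion. The frame length, model order, and synthetic test signal are illustrative assumptions.

    # Minimal LPC-by-autocorrelation sketch (illustrative; not the paper's method).
    import numpy as np

    def autocorrelation(frame, order):
        """Biased autocorrelation r[0..order] of one analysis frame."""
        n = len(frame)
        return np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])

    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations for predictor coefficients a[0..order]."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err                      # reflection coefficient
            a[1:i + 1] += k * a[i - 1::-1][:i]  # symmetric coefficient update
            err *= (1.0 - k * k)                # remaining prediction error
        return a, err

    # Illustrative 20 ms frame at the 8 kHz sampling rate mentioned above;
    # the two sine components stand in for real speech and are assumptions.
    fs = 8000
    t = np.arange(160) / fs
    frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
    r = autocorrelation(frame, order=10)
    a, err = levinson_durbin(r, order=10)
    print("LPC coefficients:", np.round(a, 3))
    print("Residual prediction error:", round(err, 3))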
Similar articles
Session 2aSC: Linking Perception and Production (Poster Session) 2aSC55. Speech sensorimotor learning through a virtual vocal tract
Studies of speech sensorimotor learning often manipulate auditory feedback by modifying isolated acoustic parameters such as formant frequency or fundamental frequency using near real-time resynthesis of a participant's speech. An alternative approach is to engage a participant in a total remapping of the sensorimotor working space using a virtual vocal tract. To support this approach for study...
Simulation of disordered speech using a frequency-domain vocal tract model
In this paper, we address the issue of how the perception of disorderness in selected types of speech disorders may be correlated with the abnormal articulatory structure and with the related acoustic properties. As a first step towards this end we have developed an articulatory synthesizer based on frequency-domain simulation of vocal-tract wave propagation (a rough illustrative sketch of this general technique appears below, after these related articles). The synthesizer has been implemente...
Imitating Conversational Laughter with an Articulatory Speech Synthesizer
In this study we present initial efforts to model laughter with an articulatory speech synthesizer. We aimed at imitating a real laugh taken from a spontaneous speech database and created several synthetic versions of it using articulatory synthesis and diphone synthesis. In modeling laughter with articulatory synthesis, we also approximated features like breathing noises that do not normally o...
Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces
Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be contro...
Knowledge from Speech Production Used in Speech Technology: Articulatory Synthesis*
There appears to be a continuing trend toward incorporating knowledge of speech production into speech technology: text-to-speech synthesis (e.g., Bickley, Stevens, & Williams, 1994; Parthasarthy & Coker, 1992), low bit rate coding (see Schroeter & Sondhi, 1992), and automatic speech recognition (e.g., Rose, Schroeter, & Sondhi, 1994; Shirai & Kobayashi, 1986). For automatic speech recognition, ...
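The frequency-domain vocal-tract simulation mentioned in the disordered-speech entry above can be illustrated, in very rough form, by cascading the 2x2 acoustic chain (ABCD) matrices of uniform lossless tube sections and reading the glottis-to-lips transfer function off the matrix product at each frequency. This is only a generic sketch under simplifying assumptions (no losses, ideal open lip end, arbitrary area function), not the implementation described in that paper or in the synthesizer above.

    # Generic frequency-domain tube-model sketch (illustrative assumptions only).
    import numpy as np

    RHO = 1.2    # air density (kg/m^3), assumed
    C = 350.0    # speed of sound (m/s), assumed

    def chain_matrix(area, length, omega):
        """Lossless ABCD matrix of one uniform tube section (pressure / volume velocity)."""
        zc = RHO * C / area                  # characteristic acoustic impedance
        bl = omega * length / C              # phase across the section
        return np.array([[np.cos(bl), 1j * zc * np.sin(bl)],
                         [1j * np.sin(bl) / zc, np.cos(bl)]])

    def transfer_gain(areas, lengths, freqs):
        """|U_lips / U_glottis| versus frequency for an ideal open lip end (P_lips = 0)."""
        gains = []
        for f in freqs:
            omega = 2.0 * np.pi * f
            k = np.eye(2, dtype=complex)
            for a, l in zip(areas, lengths):
                k = k @ chain_matrix(a, l, omega)
            gains.append(1.0 / abs(k[1, 1]))  # with P_lips = 0, U_glottis = D * U_lips
        return np.array(gains)

    # Ten sections of an assumed 17 cm tract; the area values are placeholders.
    areas = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.0, 3.0, 2.0, 1.5, 1.0]) * 1e-4  # m^2
    lengths = np.full(10, 0.017)                                                  # m
    freqs = np.arange(50.0, 5000.0, 10.0)                                         # Hz
    h = transfer_gain(areas, lengths, freqs)
    peaks = freqs[1:-1][(h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])]
    print("Resonance (formant) estimates in Hz:", peaks[:4])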
Journal: IEEE Trans. Acoustics, Speech, and Signal Processing
Volume: 35, Issue: -
Pages: -
Publication date: 1987